FAA Framework: A Large Language Model-Based Approach for Credit Card Fraud Investigations

Shuster, Shaun, Zaloof, Eyal, Shabtai, Asaf, Puzis, Rami

arXiv.org Artificial Intelligence

The continuous growth of the e-commerce industry attracts fraudsters who exploit stolen credit card details. Companies often investigate suspicious transactions in order to retain customer trust and address gaps in their fraud detection systems. However, analysts are overwhelmed by the enormous number of alerts from credit card transaction monitoring systems. Each alert investigation requires careful attention, specialized knowledge, and precise documentation of the outcomes from fraud analysts, leading to alert fatigue. To address this, we propose a fraud analyst assistant (FAA) framework, which employs multi-modal large language models (LLMs) to automate credit card fraud investigations and generate explanatory reports. The FAA framework leverages the reasoning, code execution, and vision capabilities of LLMs to conduct planning, evidence collection, and analysis in each investigation step. A comprehensive empirical evaluation of 500 credit card fraud investigations demonstrates that the FAA framework produces reliable and efficient investigations comprising seven steps on average. We found that the FAA framework can automate large parts of the workload and help reduce the challenges faced by fraud analysts.
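The plan–collect–analyze loop the abstract describes can be sketched roughly as follows. This is a minimal illustration only: the step names, the stub planner and collector, and the `investigate` interface are assumptions for the sketch, not the paper's actual implementation (which delegates planning and evidence collection to an LLM with code-execution and vision capabilities).

```python
# Illustrative sketch of an FAA-style investigation loop.
# The planner picks the next investigation step, the collector gathers
# evidence for it, and the loop ends when the planner decides to conclude.

def plan_next_step(evidence):
    """Stub planner; in the framework an LLM chooses the next action."""
    steps = ["check_card_history", "check_ip_geolocation", "check_device_fingerprint"]
    return steps[len(evidence)] if len(evidence) < len(steps) else "conclude"

def collect_evidence(step, transaction):
    """Stub collector; in the framework LLM-generated code queries real data."""
    return {step: f"result of {step} for txn {transaction['id']}"}

def investigate(transaction, max_steps=10):
    """Run planning/collection until the planner concludes, then report."""
    evidence = {}
    for _ in range(max_steps):
        step = plan_next_step(evidence)
        if step == "conclude":
            break
        evidence.update(collect_evidence(step, transaction))
    return {
        "transaction": transaction["id"],
        "steps_taken": list(evidence),
        "verdict": "suspicious" if len(evidence) >= 3 else "benign",
    }

print(investigate({"id": "txn-001"})["steps_taken"])
```

In the actual framework each iteration would involve an LLM call rather than a fixed step list, and the final report would be a generated explanatory document rather than a dictionary.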


Oregon is dropping an artificial intelligence tool used in child welfare system

NPR Technology

Sen. Ron Wyden, D-Ore., speaks during a Senate Finance Committee hearing on Oct. 19, 2021. Wyden says he has long been concerned about the algorithms used by his state's child welfare system. Child welfare officials in Oregon will stop using an algorithm to help decide which families are investigated by social workers, opting instead for a new process that officials say will make better, more racially equitable decisions.


The Role of AI in Tackling Financial Crime

#artificialintelligence

FREMONT, CA: Financial regulators globally are cracking down on banks. As Anti-Money Laundering (AML) and Know-Your-Customer (KYC) procedures grow stricter, hefty fines are being imposed on institutions found to be in breach. Recent studies have found that banks across the globe have been charged a total of USD 26 billion in monetary penalties for AML and sanctions violations over the last ten years. As banks and financial institutions pursue digital transformation initiatives to streamline and simplify the customer onboarding process and reduce the risk associated with fraud, many are looking to exploit the potential of emerging technologies. AI enables financial institutions to simplify identifying illicit client relationships, beneficiaries, and links to criminal or terrorist activity during the onboarding phase.


Tackling financial crime with AI

#artificialintelligence

Financial regulators around the world are cracking down on banks. With Anti-Money Laundering (AML) and Know-Your-Customer (KYC) procedures being put under the microscope, huge fines are being levied against institutions found to be in breach. In fact, a recent study discovered that over the past ten years, banks across the globe have been slapped with a total of US$26 billion in monetary penalties for AML and sanctions violations. As banks and financial institutions embark on digital transformation initiatives to streamline and simplify the customer onboarding process and reduce risk associated with fraud, many are eyeing the potential of emerging technologies. This enables financial institutions to simplify the process of identifying illicit client relationships, beneficiaries and links to criminal or terrorist activity during the onboarding phase.


Artificial Intelligence (AI) and Security: A Match Made in the SOC

#artificialintelligence

Change is constant in cybersecurity -- continual, rapid, dynamic change. It's impossible to maintain an effective defensive posture without constantly evolving. Security measures that worked in the past will not be effective today, and today's security controls will not be effective tomorrow. Many factors contribute to this rapid pace of change. Attacks are on the rise, and they are getting more advanced, persistent and stealthy each day, with some attackers even leveraging artificial intelligence (AI) to power their campaigns.


How Artificial Intelligence (AI) Helps Bridge the Cybersecurity Skills Gap

#artificialintelligence

The widespread shortage of skilled security operations and threat intelligence resources in security operations centers (SOCs) leaves many organizations open to the increased risk of a security incident. That's because they are unable to effectively investigate all discovered, potentially malicious behaviors in their environment in a thorough and repeatable way. According to ESG, two-thirds of security professionals believe the cybersecurity skills gap has led to an increased workload for existing staff. "Since organizations don't have enough people, they simply pile more work onto those that they have," wrote ESG Senior Principal Analyst Jon Oltsik. "This leads to human error, misalignment of tasks to skills, and employee burnout."